
  1. Abstract

    In regions of high seismic risk, it is important for city managers and decision makers to create programs that mitigate earthquake risk to buildings. For large cities and regions, such a program relies on accurate information about the building stock, that is, a database of all buildings in the area and of the structural defects that make them vulnerable to strong ground shaking. Structural defects and vulnerabilities can manifest in a building's appearance. One example is the soft-story building, whose vertical irregularity is often observable from the facade. This structural type can suffer severe damage or even collapse during moderate or severe earthquakes, so it is critical to screen large building stocks for such buildings and retrofit them. Screening for soft-story structures by conventional methods, however, is usually time-consuming. In our previous study we addressed this by using full-image classification to screen them out of street view images, but full-image classification has difficulty locating buildings within an image, which leads to unreliable predictions. In this paper we develop an automated pipeline that segments street view images to identify soft-story buildings. Because annotated data for this purpose is scarce, we compiled a dataset of street view images and present a strategy for annotating them semi-automatically. The annotated dataset is then used to train an instance segmentation model that can detect soft-story buildings in unseen images.
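    The screening step could look like the following minimal sketch, which assumes a torchvision Mask R-CNN fine-tuned on the annotated dataset; the checkpoint path, class index, and score threshold are illustrative assumptions, not the paper's actual configuration.

    ```python
    import torch
    from torchvision.io import read_image
    from torchvision.models.detection import maskrcnn_resnet50_fpn
    from torchvision.transforms.functional import convert_image_dtype

    SOFT_STORY_CLASS = 1   # hypothetical label index (0 is background)
    SCORE_THRESHOLD = 0.7  # assumed confidence cutoff

    # Two classes: background + soft-story building; the weights are assumed
    # to come from fine-tuning on the annotated street view dataset.
    model = maskrcnn_resnet50_fpn(num_classes=2)
    model.load_state_dict(torch.load("soft_story_maskrcnn.pt"))  # assumed file
    model.eval()

    @torch.no_grad()
    def screen_image(path):
        """Return instance masks and scores for soft-story detections."""
        img = convert_image_dtype(read_image(path), torch.float)
        out = model([img])[0]  # dict with boxes, labels, scores, masks
        keep = (out["labels"] == SOFT_STORY_CLASS) & (out["scores"] > SCORE_THRESHOLD)
        return out["masks"][keep], out["scores"][keep]

    masks, scores = screen_image("street_view_001.jpg")
    print(f"found {len(scores)} candidate soft-story buildings")
    ```

    Run over every street view image in a district, a loop like this yields per-building candidates that a structural engineer would then verify.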

     
  2. Abstract

    The implementation of intelligent software to identify and classify objects and individuals in images is of growing importance to practitioners in many fields, including wildlife conservation and management. To non-experts, the methods can be abstruse and the results mystifying. Here, in the context of applying cutting-edge methods to classify wildlife species from camera-trap data, we shed light on the methods themselves and on the types of features they extract to make efficient identifications and reliable classifications. The current state of the art is to employ convolutional neural networks (CNNs) within deep-learning pipelines. We outline these methods and present results obtained by training a CNN to classify 20 African wildlife species, with an overall accuracy of 87.5%, on a dataset containing 111,467 images. We demonstrate the application of a gradient-weighted class-activation-mapping (Grad-CAM) procedure to extract the most salient pixels in the final convolutional layer, and we show that these pixels highlight features in particular images that are, in some cases, similar to those used to train humans to identify these species. Further, we use mutual-information methods to identify the neurons in the final convolutional layer that consistently respond most strongly across a set of images of one particular species; we then interpret the image features where the strongest responses occur and present dataset biases revealed by these extracted features. We also use hierarchical clustering of the feature vectors (i.e., the state of the final fully-connected layer in the CNN) associated with each image to produce a visual-similarity dendrogram of the identified species. Finally, we evaluate the relative unfamiliarity of images that were not part of the training set, contrasting images of the 20 species "known" to our CNN with images of species "unknown" to it.
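    As a concrete illustration of the Grad-CAM step, the sketch below computes a class-activation heatmap with PyTorch hooks; the ResNet-18 backbone and its `layer4` block stand in for the paper's network, which is an assumption on our part.

    ```python
    import torch
    import torch.nn.functional as F
    from torchvision import models

    model = models.resnet18(weights="IMAGENET1K_V1").eval()  # stand-in classifier
    store = {}

    # Capture activations and gradients at the final convolutional block.
    model.layer4.register_forward_hook(
        lambda m, i, o: store.update(feat=o.detach()))
    model.layer4.register_full_backward_hook(
        lambda m, gi, go: store.update(grad=go[0].detach()))

    def grad_cam(image, class_idx=None):
        """Heatmap of the pixels that most increase the target class score."""
        logits = model(image)  # image: (1, 3, H, W), normalized
        if class_idx is None:
            class_idx = logits.argmax(dim=1).item()
        model.zero_grad()
        logits[0, class_idx].backward()
        # Weight each feature map by its spatially averaged gradient, sum, ReLU.
        weights = store["grad"].mean(dim=(2, 3), keepdim=True)     # (1, C, 1, 1)
        cam = F.relu((weights * store["feat"]).sum(dim=1, keepdim=True))
        cam = F.interpolate(cam, size=image.shape[2:], mode="bilinear",
                            align_corners=False)
        cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)  # scale to [0, 1]
        return cam.squeeze()
    ```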
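    The visual-similarity dendrogram can likewise be sketched, under our assumptions, by averaging each species' feature vectors and clustering the centroids hierarchically; the average linkage and cosine metric below are illustrative choices, not necessarily the ones used in the paper.

    ```python
    import numpy as np
    import matplotlib.pyplot as plt
    from scipy.cluster.hierarchy import linkage, dendrogram

    def species_dendrogram(features):
        """features: dict mapping species name -> (n_images, d) feature array."""
        names = sorted(features)
        # One centroid per species, averaged over all images of that species.
        centroids = np.stack([features[s].mean(axis=0) for s in names])
        Z = linkage(centroids, method="average", metric="cosine")
        dendrogram(Z, labels=names, leaf_rotation=90)
        plt.tight_layout()
        plt.show()
    ```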

     